

Search for: All records

Creators/Authors contains: "Reimann, Martin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available May 12, 2026
  2. As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation. 
    Free, publicly-accessible full text available May 1, 2026
  3. Are competent actors still trusted when they promote themselves? The answer to this question could have far-reaching implications for understanding trust production in a variety of economic exchange settings in which ability and impression management play vital roles, from succeeding in one's job to excelling in the sales of goods and services. Much social science research assumes an unconditional positive impact of an actor's ability on the trust placed in that actor: in other words, competence breeds trust. In this report, however, we challenge this assumption. Across a series of experiments, we manipulated both the ability and the self-promotion of a trustee and measured the level of trust received. Employing both online laboratory studies (n = 5,606) and a field experiment (n = 101,520), we find that impression management tactics (i.e., self-promotion and intimidation) can substantially backfire, at least for those with high ability. An explanation for this effect is encapsulated in attribution theory, which argues that capable actors are held to higher standards in terms of how kind and honest they are expected to be. Consistent with our social attribution account, mediation analyses show that competence combined with self-promotion decreases the trustee's perceived benevolence and integrity and, in turn, the level of trust placed in that actor.
  4.
    Trust is key to understanding the dynamics of social relations, to the extent that it is often viewed as the glue that holds society together. We review the mounting sociological literature to help answer what trust is and where it comes from. To this end, we identify two research streams—on particularized trust and generalized trust, respectively—and propose an integrative framework that bridges these lines of research while also enhancing conceptual precision. This framework provides the springboard for identifying several important avenues for future research, including new investigations into the radius of trust, the intermediate form of categorical trust, and the interrelationships between different forms of trust. This article also calls for more scholarship focusing on the consequences (versus antecedents) of trust, addressing more fully the trustee side of the relation, and employing new empirical methods. Such novel approaches will ensure that trust research will continue to provide important insights into the functioning of modern society in the years to come. Expected final online publication date for the Annual Review of Sociology, Volume 47 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates. 